
Reformat Jenkinsfile and switch quantization to CUDA 9 #9

Conversation

marcoabreu

No description provided.

@marcoabreu (Author)

Please don't forget to rebase

@reminisce merged commit 9a310e5 into reminisce:merge_quantization_to_master on Mar 21, 2018
reminisce added a commit that referenced this pull request Mar 28, 2018
* [Quantization] 8bit Quantization and GPU Support

[Quantization] CuDNN 8bit quantized relu v0.1

[Quantization] CuDNN 8bit quantized max_pool v0.1

[Quantization] CuDNN 8bit quantized lrn v0.1

[Quantization] CuDNN 8bit quantized convolution v0.1

[Quantization] CuDNN 8bit quantized fully connected v0.1

[Quantization] Small fix

[Quantization] Implement backward method

[Quantization] Convolution backward method

[Quantization] Add range for matmul and conv

[Quantization] New types in ndarray.py

[Quantization] 8bit conv works

[Quantization] conv support multiple type

[Quantization] matmul works now

[Quantization] matmul works well

[Quantization] Refactor quantization operators

[Quantization] Op: quantize_down_and_shrink_range

[Quantization] Complete quantize_graph_pass

[Quantization] Add example

[Quantization] Take zero-center quantize, accuracy fixed

[Quantization] Multiple layers MLP pass

[Quantization] Make quantized_conv same as Convolution

[Quantization] quantized_conv works

[Quantization] Fix bug

[Quantization] lenet works now

[Quantization] Add quantized_flatten

[Quantization] Quantized max pool works well

[Quantization] Make quantized_conv support NHWC

[Quantization] add max_pool

[Quantization] add ignore_symbols

[Quantization] Save change

[Quantization] Reorganize tests, 8 layers resnet works on cifar

[Quantization] Support for 'NHWC' max pool

[Quantization] Support for 'NHWC' quantized max pool

[Quantization] Fix speed of quantize_down_and_shrink_range

[Quantization] script for resnet on imagenet

[Quantization] refactor for quantize offline

[Quantization] Fix infershape

[Quantization] Update test

[Quantization] Update example

[Quantization] Fix build error
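The sub-commits above introduce the core int8 scheme ("Take zero-center quantize", per-op min/max ranges for matmul and conv). As a rough illustration of what a zero-centered (symmetric) quantize/dequantize pair looks like, here is a minimal NumPy sketch; the helper names `quantize_symmetric` and `dequantize_symmetric` are hypothetical and are not MXNet operators:

```python
import numpy as np

def quantize_symmetric(data, min_range, max_range):
    """Zero-centered int8 quantization: map [-real_range, real_range] onto [-127, 127].

    Illustrative only; MXNet's quantize op has its own signature and GPU kernels.
    """
    real_range = max(abs(min_range), abs(max_range))  # symmetric around zero
    scale = 127.0 / real_range
    q = np.clip(np.round(data * scale), -127, 127).astype(np.int8)
    return q, -real_range, real_range

def dequantize_symmetric(q, min_range, max_range):
    """Recover approximate float32 values from int8 data and its recorded range."""
    scale = max(abs(min_range), abs(max_range)) / 127.0
    return q.astype(np.float32) * scale

x = np.random.uniform(-2.5, 2.5, size=(4, 4)).astype(np.float32)
xq, qmin, qmax = quantize_symmetric(x, x.min(), x.max())
print(np.abs(x - dequantize_symmetric(xq, qmin, qmax)).max())  # small round-trip error
```

Keeping the range symmetric around zero is what lets int8 matmul and convolution accumulate into int32 without tracking a zero-point offset, which is why the ranges are carried alongside every quantized tensor in the commits above.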

* [Quantization] Add calibration flow and refactor code

Rebase with dmlc/master

Add quantize_down_and_shrink by threshold

Don't assign resource when threshold is available for quantize_down_and_shrink

Fix quantize_down_and_shrink saturation

Implement pass for setting calib table to node attrs

Rebase with upstream master

Change threshold to min/max quantized params

Add c-api for setting calib table to graph

Add calibration front end function

Bug fixes and add unit test

Add data iter type to calibration

Fix bug in calibrate_quantized_model

Bug fix and add example

Add the second calibration approach and benchmark

Fix

Fix infer error and add benchmark for conv

Add benchmark script

Change output names and argument names

Remove commented out code

Change name

Add layout to benchmark_convolution

Remove redundant comment

Remove common and add soft link

More fix and benchmark

Add scripts to plot images

Minor fix

More fix

More fix and util tools

Tools and support bias in quantized_conv2d

Add script for getting the optimal thresholds using kl divergence

Add kl divergence for optimizing thresholds

Add benchmark scripts

Fix compile after rebasing on master

Allocate temp space only once for quantized_conv2d

Change quantize_down_and_shrink_range to allocate temp space once

No temp space for calib model

Refactor quantize_down_and_shrink_range into requantize

Refactor quantized convolution using nnvm interfaces

Fix quantized_conv bug

Use ConvolutionParam for QuantizedCuDNNConvOp

Refactor quantized fc using nnvm interfaces

Change TQuantizationNeedShrink to FNeedRequantize

Refactor quantized_pooling

Simplify FQuantizedOp interface

Better naming

Fix shape and type inference for quantized_flatten

Clean up quantization frontend APIs and examples

Delete quantized lrn and relu

Add python script for generating quantized models

Add script for running inference

Add inference example

Remove redundant files from example/quantization

Simplify user-level python APIs

Add logger

Improve user-level python api

Fix coding style

Add unit test for quantized_conv

Fix bugs in quantized_fully_connected and add unit test

Add unit test for requantize

Fix a bug and add python api unit tests

Import test_quantization in test_operator_gpu.py

Rebase with master

Remove redundant files

Fix test case for python3 and fix doc

Fix unit tests

Fix unit tests for python3

Release used ndarrays in calibration for saving memory usage

Simplify releasing memory of used ndarrays for calibration

Fix a bug

Revert "Fix a bug"

This reverts commit f7853f2.

Revert "Simplify releasing memory of used ndarrays for calibration"

This reverts commit 70b9e38.

Clean up benchmark script and improve example

Add API and example documentation and fix bugs

Remove redundant test file and improve error message

Merge quantize and dequantize with master impl

Remove commented code

Hide monitor interface from users

Remove interface from Module

Add license header

Move quantization unittests to a separate folder so that it can be only run on P3 instances

Remove quantization unittests from test_operator_gpu.py

Move quantization to contrib

Fix lint

Add mxnetlinux-gpu-p3 to jenkins

Fix jenkins

Fix CI build

Fix CI

Update jenkins file

Use cudnn7 for ci

Add docker file for quantization unit test only

Correctly skip build with cudnn < 6

Add doc for quantize symbol api

Fix lint

Fix python3 and add doc

Try to fix cudnn build problem
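The calibration commits above ("Add kl divergence for optimizing thresholds", "Change threshold to min/max quantized params") pick, per layer, a clipping threshold whose quantized histogram best matches the observed float32 activation distribution. Below is a simplified NumPy sketch of that entropy-calibration idea; it assumes nothing about MXNet's internals, and the real routine additionally smooths empty bins and handles edge cases:

```python
import numpy as np

def kl_threshold(activations, num_bins=2048, num_quantized_bins=255):
    """Pick an activation clipping threshold by minimizing KL(reference || quantized).

    Simplified sketch of the entropy calibration referenced in the commits above.
    """
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_thr, best_kl = float(edges[-1]), np.inf
    for i in range(num_quantized_bins, num_bins + 1):
        # Reference distribution: keep the first i bins, folding the clipped tail
        # into the last kept bin.
        ref = hist[:i].astype(np.float64)
        ref[-1] += hist[i:].sum()
        # Candidate distribution: collapse the kept bins down to num_quantized_bins
        # levels, then expand back uniformly so both histograms have length i.
        chunks = np.array_split(ref, num_quantized_bins)
        cand = np.concatenate([np.full(len(c), c.sum() / len(c)) for c in chunks])
        p = ref / ref.sum()
        q = cand / cand.sum()
        kl = np.sum(p * np.log((p + 1e-12) / (q + 1e-12)))
        if kl < best_kl:
            best_kl, best_thr = kl, float(edges[i])
    return best_thr

# Example: roughly Gaussian activations with a few outliers. The chosen threshold
# sits well below max |activation|, which would otherwise waste most of the int8 range.
acts = np.concatenate([np.random.normal(0, 1, 100000), [25.0, -30.0]])
print(kl_threshold(acts))
```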

* Fix compile error

* Fix CI

* Remove tests that should not run on P3

* Remove unnecessary docker file

* Fix registering quantized nn ops

* Reformat Jenkinsfile and switch quantization to CUDA 9 (#9)

* Address interface change cr

* Address comments and fix bugs

* Make unit test stable

* Improve unit test

* Address cr

* Address cr

* Fix flaky unit test layer_norm

* Fix doc
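For reference, the user-level frontend these commits converge on ("Add calibration front end function", "Simplify user-level python APIs") lives in `mxnet.contrib.quantization`. A hedged usage sketch follows, with the `quantize_model` signature as it appears in later MXNet 1.x releases; argument names may differ from this PR's initial version, and the checkpoint and record-file names are placeholders:

```python
import logging
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Placeholder FP32 checkpoint (resnet-symbol.json / resnet-0000.params) and
# placeholder calibration data iterator.
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet', 0)
calib_iter = mx.io.ImageRecordIter(path_imgrec='val.rec', batch_size=32,
                                   data_shape=(3, 224, 224))

qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.gpu(0),                    # the quantized kernels in this PR target GPU/cuDNN
    excluded_sym_names=['softmax'],   # keep numerically sensitive layers in FP32
    calib_mode='entropy',             # KL-divergence calibration, as in the commits above
    calib_data=calib_iter,
    num_calib_examples=500,
    logger=logging.getLogger(__name__))

mx.model.save_checkpoint('resnet-quantized', 0, qsym, qarg_params, qaux_params)
```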